Uniform convergence rates for Lipschitz learning on graphs
Authors
Abstract
Lipschitz learning is a graph-based semi-supervised learning method where one extends labels from a labeled to an unlabeled data set by solving the infinity Laplace equation on a weighted graph. In this work we prove uniform convergence rates for solutions of the graph infinity Laplace equation as the number of vertices grows to infinity. Their continuum limits are absolutely minimizing Lipschitz extensions with respect to the geodesic metric of the domain the vertices are sampled from. We work under very general assumptions on the graph weights, the labeled vertices, and the domain. Our main contribution is that we obtain quantitative convergence rates even for sparsely connected graphs, as they typically appear in applications like semi-supervised learning. In particular, our framework allows graph bandwidths down to the connectivity radius. For the proof we first show a quantitative convergence statement for graph distance functions to geodesic distance functions in the continuum. Using the "comparison with distance functions" principle, we can then pass these statements to infinity harmonic extensions.
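For concreteness, a sketch of the operator involved, in notation that is standard in this literature rather than quoted from the paper: with edge weights $w_{xy} \ge 0$, the graph infinity Laplacian of a function $u$ is

\[ \Delta_\infty u(x) = \max_{y} w_{xy}\,\big(u(y) - u(x)\big) + \min_{y} w_{xy}\,\big(u(y) - u(x)\big), \]

and Lipschitz learning solves $\Delta_\infty u(x) = 0$ at every unlabeled vertex $x$, subject to $u = g$ on the labeled set, where $g$ denotes the given labels.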
Similar resources
Algorithms for Lipschitz Learning on Graphs
We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices. The extension we compute is the absolutely minimal Lipschitz extension, and is the limit for large p of p-Laplacian regularization. We present an algorithm that computes a minimal Lipschitz extension in ...
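As a hedged illustration of the large-$p$ limit mentioned in this abstract (the normalization of the weights varies between papers), $p$-Laplacian regularization minimizes

\[ J_p(u) = \sum_{x,y} w_{xy}^p\,\lvert u(y) - u(x) \rvert^p \]

over all extensions $u$ that agree with the given labels; as $p \to \infty$, minimizers converge under suitable conditions to the absolutely minimal Lipschitz extension, whose local Lipschitz constant $\max_{x,y} w_{xy}\,\lvert u(y) - u(x) \rvert$ cannot be decreased on any subset of unlabeled vertices.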
Uniform Convergence Rates for Kernel Density Estimation
Kernel density estimation (KDE) is a popular nonparametric density estimation method. We (1) derive finite-sample high-probability density estimation bounds for multivariate KDE under mild density assumptions which hold uniformly in x ∈ R^d and bandwidth matrices. We apply these results to (2) mode, (3) density level set, and (4) class probability estimation and attain optimal rates up to logarit...
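For reference, a standard form of the multivariate estimator in question (exact scaling conventions differ slightly across references): given a sample $X_1, \dots, X_n$, a kernel $K$ and a bandwidth matrix $H$,

\[ \hat f_n(x) = \frac{1}{n\,\lvert H \rvert^{1/2}} \sum_{i=1}^{n} K\big(H^{-1/2}(x - X_i)\big), \]

and the cited bounds control $\lvert \hat f_n(x) - f(x) \rvert$ uniformly over $x \in \mathbb{R}^d$ and over a range of bandwidth matrices $H$.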
A note on "Convergence rates and asymptotic normality for series estimators": uniform convergence rates
This paper establishes improved uniform convergence rates for series estimators. Series estimators are least-squares fits of a regression function where the number of regressors depends on sample size. I will specialize my results to the cases of polynomials and regression splines. These results improve upon results obtained earlier by Newey, yet fail to attain the optimal rates of convergence....
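A minimal sketch of the estimator being analyzed, in generic notation: with basis functions $p^{K}(x) = (p_1(x), \dots, p_K(x))'$ (polynomials or splines) whose number $K = K_n$ grows with the sample size $n$, the regression function $g(x) = \mathbb{E}[y \mid x]$ is estimated by the least-squares fit

\[ \hat g(x) = p^{K}(x)'\hat\beta, \qquad \hat\beta = \operatorname*{arg\,min}_{\beta} \sum_{i=1}^{n} \big(y_i - p^{K}(x_i)'\beta\big)^2, \]

and a uniform convergence rate is a bound on $\sup_x \lvert \hat g(x) - g(x) \rvert$.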
On Convergence Rates of Gibbs Samplers for Uniform Distributions
We consider a Gibbs sampler applied to the uniform distribution on a bounded region R ⊆ R^d. We show that the convergence properties of the Gibbs sampler depend greatly on the smoothness of the boundary of R. Indeed, for sufficiently smooth boundaries the sampler is uniformly ergodic, while for jagged boundaries the sampler could fail to even be geometrically ergodic.
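To make the sampler concrete (notation assumed, not quoted from the paper): from the current state $x = (x_1, \dots, x_d) \in R$, one coordinate at a time is redrawn uniformly from the one-dimensional slice of $R$ through the current point,

\[ x_j' \sim \mathrm{Unif}\big\{\, t \in \mathbb{R} : (x_1, \dots, x_{j-1},\, t,\, x_{j+1}, \dots, x_d) \in R \,\big\}, \]

so the smoothness of $\partial R$ governs the geometry of these slices near the boundary and hence how quickly the chain mixes.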
Journal
Journal title: IMA Journal of Numerical Analysis
Year: 2022
ISSN: 1464-3642, 0272-4979
DOI: https://doi.org/10.1093/imanum/drac048